
    Attractiveness and distinctiveness between speakers' voices in naturalistic speech and their faces are uncorrelated

    honest signal hypothesis, attractiveness, averageness, face, distinctiveness, voice

    Discrimination in the workplace, reported by people with major depressive disorder: a cross-sectional study in 35 countries.

    OBJECTIVE: Whereas employment has been shown to be beneficial for people with Major Depressive Disorder (MDD) across different cultures, employers' attitudes towards workers with MDD have been shown to be negative. This may form an important barrier to work participation. To date, little is known about how stigma and discrimination affect the work participation of workers with MDD, especially from their own perspective. We aimed to assess, in a working-age population including respondents with MDD from 35 countries: (1) whether people with MDD anticipate and experience discrimination when trying to find or keep paid employment; (2) whether participants in highly, moderately and less developed countries differ in these respects; and (3) whether discrimination experiences are related to actual employment status (i.e., having a paid job or not). METHOD: Participants in this cross-sectional study (N=834) had a diagnosis of MDD in the previous 12 months. They were interviewed using the Discrimination and Stigma Scale (DISC-12). Analysis of variance and generalised linear mixed models were used to analyse the data. RESULTS: Overall, 62.5% had anticipated and/or experienced discrimination in the work setting. In very highly developed countries, almost 60% of respondents had stopped themselves from applying for work, education or training because of anticipated discrimination. Having experienced workplace discrimination was independently related to unemployment. CONCLUSIONS: Across different countries and cultures, people with MDD very frequently reported discrimination in the work setting. Effective interventions are needed to enhance work participation in people with MDD, focusing simultaneously on decreasing stigma in the work environment and on decreasing self-discrimination by empowering workers with MDD.

    Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces

    Recognizing people from their voices may be facilitated by a voice’s distinctiveness, in a manner similar to that reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation), in six learning-test cycles. During learning, distinctive faces increased early visually-evoked potentials (N170, P200, N250) relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at occipito-temporal and fronto-central electrodes. At test, unimodally presented voices previously learned with distinctive faces were classified more quickly than voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. A preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings support a theory of early interaction between voice and face processing areas during both learning and voice recognition.

    The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices

    Humble D, Schweinberger SR, Mayer A, Dobel C, Zäske R. The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices. PsyArXiv. 2021. The ability to recognize someone’s voice exists on a broad spectrum, with phonagnosia at the low end and super recognition at the high end. Yet there is no standardized test to measure an individual’s ability to learn and recognize newly learnt voices with samples of speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-minute test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants first become familiarized with eight speakers and then perform a three-alternative forced-choice recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected based on 232 participants and validated based on 454 participants in an online study. Mean accuracy is 0.51 with an SD of 0.18. The JVLMT showed high and moderate correlations with the convergent validation tests (Bangor Voice Matching Test and Glasgow Voice Memory Test, respectively) and a weak correlation with the discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super recognition abilities and seven participants with phonagnosia were identified (at least 2 SDs above or below the mean, respectively). The JVLMT is a promising diagnostic tool to screen for voice recognition abilities in scientific and neuropsychological contexts.
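The abstract above notes that JVLMT scores rest on Rasch-conform items. Under the Rasch (one-parameter logistic) model, the probability of a correct response depends only on the difference between a participant's ability θ and an item's difficulty b, both on the logit scale. A minimal sketch of that relationship (the parameter values below are hypothetical illustrations, not the published item calibrations):

```python
import math

def rasch_p_correct(theta, b):
    """Rasch (1PL) model: probability of a correct response for a
    participant of ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical values: an average listener (theta = 0) attempting
# items of increasing difficulty; harder items lower the success rate.
for b in (-1.0, 0.0, 1.0):
    print(f"b={b:+.1f}  P(correct)={rasch_p_correct(0.0, b):.2f}")
# When theta equals b, P(correct) is exactly 0.5.
```

In an IRT-based test like this, item difficulties are estimated from calibration data (here, the 232 selection-phase participants), and a respondent's ability is then scored against those fixed difficulties.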

    Bimodal learning effect, depicted as mean d’ differences between face-voice (FV) and voice-only (V) modality conditions for pairs of consecutive study-test cycles in Exp. 1 (static faces) and Exp. 2 (dynamic faces). Note that increasing benefits of bimodal learning from cycle pairs 1_2 towards 5_6 were independent of face animation mode (static vs. dynamic). Error bars are standard errors of the mean (SEM).
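The d’ values in these figures are the standard sensitivity index from signal detection theory: the z-transformed hit rate (studied voices correctly recognized) minus the z-transformed false-alarm rate (novel voices wrongly called studied). A minimal sketch of that computation (the cell counts are hypothetical, not data from the study, and the log-linear correction is one common convention rather than necessarily the one used here):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps perfect
    rates of 0 or 1 from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical test block: 20 studied and 20 novel voices.
print(d_prime(hits=16, misses=4, false_alarms=6, correct_rejections=14))
```

A d’ of 0 means studied and novel voices are indistinguishable to the listener; the FV-minus-V differences plotted in the figure therefore index how much face information during study improved later voice recognition.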

    Mean d’ (±SEM) for the factors face animation mode (static vs. dynamic), learning modality (face-voice [FV] vs. voice [V]), and cycle pairs (1_2; 3_4; 5_6).


    Top: Trial procedure for the study phases, depicted for the voice-only block (V) and the face-voice block (FV). Bottom: Trial procedure in the test phases following V and FV learning. Note that face stimuli were static pictures in Exp. 1 and dynamic videos in Exp. 2. The individual depicted in this figure has given written informed consent (as outlined in the PLOS consent form) to publish this photograph.

    Mean proportion of correct responses to studied vs. novel voices depicted for pairs of consecutive study-test cycles.

    Note that data are collapsed across learning modality (FV and V) and animation mode (static [Exp. 1] and dynamic [Exp. 2]). Error bars are SEM.

    Implicit memory for content and speaker of messages heard during slow-wave sleep

    Although sleep is a state of unconsciousness, the sleeping brain does not completely cease to process external events. In fact, our brain is able to distinguish between meaningful and nonsensical messages and can even learn contingencies between non-verbal events while asleep. Here, we asked whether sleeping humans can encode new verbal messages, learn the voices of unfamiliar speakers, and form associations between speakers and messages. To this aim, we presented 28 sentences uttered by 28 unfamiliar speakers to participants who were in EEG-defined slow-wave sleep. After waking, participants performed three tests which assessed recognition of sleep-played speakers, messages, and speaker-message associations. Recognition accuracy in all tests was at chance level, suggesting that sleep-played stimuli were not learned explicitly. However, response latencies were significantly shorter for correct vs. incorrect decisions in the message recognition test, indicating implicit memory for sleep-played messages (but not for speakers or speaker-message combinations). Furthermore, participants with excellent implicit memory for sleep-played messages also displayed implicit memory for speakers (but not speaker-message associations), as suggested by a significant correlation between response-latency differences for the recognition of messages and of speakers. Implicit memory for speakers was verified by EEG at test: listening to sleep-played vs. new speakers evoked a late centro-parietal negativity. Event-related EEG recorded during sleep revealed that peaks resembling the up-states of sleep slow waves contributed to sleep-learning. Participants with larger evoked slow-wave peaks later showed stronger implicit memory. Overall, humans appear able to implicitly learn the semantic content and speakers of sleep-played messages. These forms of sleep-learning are mediated by slow waves.